Classical Shadows


A Study on Stabilizer Rényi Entropy Estimation using Machine Learning

Lipardi, Vincenzo, Dibenedetto, Domenica, Stamoulis, Georgios, Winands, Mark H. M.

arXiv.org Artificial Intelligence

Nonstabilizerness is a fundamental resource for quantum advantage, as it quantifies the extent to which a quantum state diverges from those states that can be efficiently simulated on a classical computer, the stabilizer states. The stabilizer Rényi entropy (SRE) is one of the most investigated measures of nonstabilizerness because of its computational properties and suitability for experimental measurements on quantum processors. Because computing the SRE for arbitrary quantum states is a computationally hard problem, we propose a supervised machine-learning approach to estimate it. In this work, we frame SRE estimation as a regression task and train a Random Forest Regressor and a Support Vector Regressor (SVR) on a comprehensive dataset including both unstructured random quantum circuits and structured circuits derived from the physics-motivated one-dimensional transverse Ising model (TIM). We compare the machine-learning models using two different quantum circuit representations: one based on classical shadows and the other on circuit-level features. Furthermore, we assess the generalization capabilities of the models on out-of-distribution instances. Experimental results show that an SVR trained on circuit-level features achieves the best overall performance. On the random circuits dataset, our approach converges to accurate SRE estimates but struggles to generalize out of distribution. In contrast, it generalizes well on the structured TIM dataset, even to deeper and larger circuits. In line with previous work, our experiments suggest that machine learning offers a viable path for efficient nonstabilizerness estimation.
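To make the regression setup concrete, below is a minimal sketch of such a pipeline in the spirit of the paper: circuit-level feature vectors (the placeholder features stand in for quantities such as gate counts, depth, and qubit number) paired with precomputed SRE labels, fit by a Random Forest Regressor and an SVR. The feature set, hyperparameters, and random data are illustrative assumptions, not the authors' exact choices.

```python
# Sketch of SRE estimation framed as supervised regression.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.random((500, 6))  # placeholder circuit-level features per circuit
y = rng.random(500)       # placeholder SRE labels, computed exactly offline

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, model in [("RF", RandomForestRegressor(n_estimators=200, random_state=0)),
                    ("SVR", SVR(kernel="rbf", C=10.0))]:
    model.fit(X_tr, y_tr)
    print(name, "MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```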


Learning quantum many-body data locally: A provably scalable framework

Chinzei, Koki, Tran, Quoc Hoan, Matsumoto, Norifumi, Endo, Yasuhiro, Oshima, Hirotaka

arXiv.org Artificial Intelligence

Machine learning (ML) holds great promise for extracting insights from complex quantum many-body data obtained in quantum experiments. This approach can efficiently solve certain quantum problems that are classically intractable, suggesting potential advantages of harnessing quantum data. However, addressing large-scale problems still requires significant amounts of data beyond the limited computational resources of near-term quantum devices. We propose a scalable ML framework called the Geometrically Local Quantum Kernel (GLQK), designed to efficiently learn quantum many-body experimental data by leveraging the exponential decay of correlations, a phenomenon prevalent in noncritical systems. In the task of learning an unknown polynomial of quantum expectation values, we rigorously prove that GLQK substantially improves the polynomial sample complexity in the number of qubits n over the existing shadow kernel, by constructing a feature space from local quantum information at the correlation-length scale. This improvement is particularly notable when each term of the target polynomial involves few local subsystems. Remarkably, for translationally symmetric data, GLQK achieves constant sample complexity, independent of n. We numerically demonstrate its high scalability in two learning tasks on quantum many-body phenomena. These results establish new avenues for utilizing experimental data to advance the understanding of quantum many-body physics.

Understanding complex quantum many-body phenomena is a pivotal challenge across various fields, including physics, chemistry, and biology. Classical computational approaches often struggle to capture the intricate interplay of interactions in these systems due to the exponential dimensionality of the Hilbert space. Recent advances in experimental control over quantum systems offer a promising avenue for probing these phenomena.
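As a rough illustration of the geometric-locality idea, the toy kernel below summarizes each data point by per-site feature vectors, gathers them into windows of roughly the correlation length around each site, and averages site-local Gaussian kernels instead of comparing one global feature vector. The window size, Gaussian form, and averaging are illustrative assumptions, not the paper's exact construction.

```python
# Toy geometrically local kernel over per-site feature vectors.
import numpy as np

def local_windows(features, xi):
    """features: (n_sites, f) per-site data; returns (n_sites, (2*xi+1)*f)
    windows of radius xi (~correlation length) around each site."""
    n, f = features.shape
    padded = np.pad(features, ((xi, xi), (0, 0)))
    return np.stack([padded[i:i + 2 * xi + 1].ravel() for i in range(n)])

def glqk(feat_a, feat_b, xi=2, gamma=1.0):
    """Average of site-local Gaussian kernels over aligned sites."""
    wa, wb = local_windows(feat_a, xi), local_windows(feat_b, xi)
    sq = np.sum((wa - wb) ** 2, axis=1)
    return float(np.mean(np.exp(-gamma * sq)))

a = np.random.default_rng(1).random((16, 3))  # 16 sites, 3 local observables
b = np.random.default_rng(2).random((16, 3))
print(glqk(a, b))
```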


Rethink the Role of Deep Learning towards Large-scale Quantum Systems

Zhao, Yusheng, Zhang, Chi, Du, Yuxuan

arXiv.org Artificial Intelligence

Characterizing the ground-state properties of quantum systems is fundamental to capturing their behavior but computationally challenging. Recent advances in AI have introduced novel approaches, with diverse machine learning (ML) and deep learning (DL) models proposed for this purpose. However, the necessity and specific role of DL models in these tasks remain unclear, as prior studies often employ varied or impractical quantum resources to construct datasets, resulting in unfair comparisons. To address this, we systematically benchmark DL models against traditional ML approaches across three families of Hamiltonians, scaling up to 127 qubits in three crucial ground-state learning tasks while enforcing equivalent quantum resource usage. Our results reveal that ML models often achieve performance comparable to or even exceeding that of DL approaches across all tasks. Furthermore, a randomization test demonstrates that measurement input features have minimal impact on DL models' prediction performance. These findings challenge the necessity of current DL models in many quantum system learning scenarios and provide valuable insights into their effective utilization.
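A minimal sketch of the randomization test mentioned above: permute the measurement-derived input features across training samples (breaking the feature-label pairing), retrain, and compare held-out scores; if the score barely drops, the model was not exploiting those features. The model, data, and metric below are placeholders, not the benchmark's actual setup.

```python
# Randomization test: does shuffling input features hurt prediction?
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((400, 32))  # measurement-derived input features
y = X[:, :4].sum(axis=1) + 0.1 * rng.standard_normal(400)  # toy target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def score(X_train, X_test):
    m = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
    return m.fit(X_train, y_tr).score(X_test, y_te)

baseline = score(X_tr, X_te)
X_tr_shuf = X_tr[rng.permutation(len(X_tr))]  # break feature-label pairing
print("baseline R^2:", baseline, "| randomized R^2:", score(X_tr_shuf, X_te))
```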


Classical Shadows with Improved Median-of-Means Estimation

Fu, Winston, Koh, Dax Enshan, Goh, Siong Thye, Kong, Jian Feng

arXiv.org Machine Learning

The classical shadows protocol, introduced by Huang et al. [Nat. Phys. 16, 1050 (2020)], makes use of the median-of-means (MoM) estimator to efficiently estimate the expectation values of $M$ observables with failure probability $\delta$ using only $\mathcal{O}(\log(M/\delta))$ measurements. In their analysis, Huang et al. used loose constants in their asymptotic performance bounds for simplicity. However, the specific values of these constants can significantly affect the number of shots used in practical implementations. To address this, we studied a modified MoM estimator proposed by Minsker [PMLR 195, 5925 (2023)] that uses optimal constants and involves a U-statistic over the data set. For efficient estimation, we implemented two types of incomplete U-statistics estimators, the first based on random sampling and the second based on cyclically permuted sampling. We compared the performance of the original and modified estimators when used with the classical shadows protocol with single-qubit Clifford unitaries (Pauli measurements) for an Ising spin chain, and global Clifford unitaries (Clifford measurements) for the Greenberger-Horne-Zeilinger (GHZ) state. While the original estimator outperformed the modified estimators for Pauli measurements, the modified estimators showed improved performance over the original estimator for Clifford measurements. Our findings highlight the importance of tailoring estimators to specific measurement settings to optimize the performance of the classical shadows protocol in practical applications.
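For concreteness, here is a small sketch of the two estimator families compared above: the standard median-of-means, and an incomplete U-statistic variant in which block means are pooled over several cyclic shifts of the data before taking the median. The block count and the cyclic scheme here are illustrative; the paper's estimators use specific tuned constants.

```python
# Median-of-means vs. an incomplete U-statistic variant (cyclic sampling).
import numpy as np

def median_of_means(x, k):
    """Split x into k disjoint blocks; return the median of block means."""
    return float(np.median([b.mean() for b in np.array_split(x, k)]))

def mom_cyclic_u(x, k, shifts=8):
    """Incomplete U-statistic MoM: pool block means from several cyclic
    permutations of the data before taking the median."""
    means = []
    for s in range(shifts):
        rolled = np.roll(x, s * (len(x) // shifts + 1))
        means.extend(b.mean() for b in np.array_split(rolled, k))
    return float(np.median(means))

rng = np.random.default_rng(0)
x = rng.standard_t(df=2, size=1200)  # heavy-tailed single-shot estimates
print(median_of_means(x, 30), mom_cyclic_u(x, 30))
```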


Predicting adaptively chosen observables in quantum systems

Huang, Jerry, Lewis, Laura, Huang, Hsin-Yuan, Preskill, John

arXiv.org Artificial Intelligence

Recent advances have demonstrated that $\mathcal{O}(\log M)$ measurements suffice to predict $M$ properties of arbitrarily large quantum many-body systems. However, these remarkable findings assume that the properties to be predicted are chosen independently of the data. This assumption can be violated in practice, where scientists adaptively select properties after looking at previous predictions. This work investigates the adaptive setting for three classes of observables: local, Pauli, and bounded-Frobenius-norm observables. We prove that $\Omega(\sqrt{M})$ samples of an arbitrarily large unknown quantum state are necessary to predict expectation values of $M$ adaptively chosen local and Pauli observables. We also present computationally efficient algorithms that achieve this information-theoretic lower bound. In contrast, for bounded-Frobenius-norm observables, we devise an algorithm requiring only $\mathcal{O}(\log M)$ samples, independent of system size. Our results highlight the potential pitfalls of adaptivity in analyzing data from quantum experiments and provide new algorithmic tools to safeguard against erroneous predictions.
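A toy illustration of the adaptive setting on a single qubit: an analyst chooses each next Pauli observable only after seeing the previous answer, and the naive safeguard below answers every query from a fresh batch of simulated measurements, so no query can overfit earlier data. This one-fresh-batch-per-query baseline is an assumption for illustration, not the paper's $\sqrt{M}$-optimal algorithm.

```python
# Toy single-qubit simulation of adaptively chosen Pauli observables.
import numpy as np

PAULIS = {"X": np.array([[0, 1], [1, 0]], dtype=complex),
          "Y": np.array([[0, -1j], [1j, 0]]),
          "Z": np.array([[1, 0], [0, -1]], dtype=complex)}

def measure_expectation(rho, pauli, shots, rng):
    """Simulate `shots` projective measurements of a single-qubit Pauli."""
    vals, vecs = np.linalg.eigh(PAULIS[pauli])
    probs = np.real(np.einsum("ij,jk,ki->i", vecs.conj().T, rho, vecs))
    probs = np.clip(probs, 0, None)
    return float(rng.choice(vals, size=shots, p=probs / probs.sum()).mean())

rng = np.random.default_rng(0)
rho = np.array([[0.8, 0.3], [0.3, 0.2]], dtype=complex)  # unknown fixed state
answers, query = {}, "Z"
for _ in range(3):  # three adaptive rounds, each on fresh shots
    answers[query] = measure_expectation(rho, query, shots=500, rng=rng)
    query = "X" if answers[query] > 0 else "Y"  # adaptive choice of next query
print(answers)
```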


Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits

Du, Yuxuan, Hsieh, Min-Hsiu, Tao, Dacheng

arXiv.org Artificial Intelligence

The vast and complicated large-qubit state space prevents us from comprehensively capturing the dynamics of modern quantum computers via classical simulations or quantum tomography. However, recent progress in quantum learning theory raises a crucial question: given a quantum circuit containing d tunable RZ gates and G-d Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties using new classical inputs, after learning from data obtained by incoherently measuring states generated by the same circuit but with different classical inputs? In this work, we prove that sample complexity scaling linearly in d is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in d. Building upon these derived complexity bounds, we further harness the concept of classical shadows and truncated trigonometric expansions to devise a kernel-based learning model capable of trading off prediction error and computational complexity, transitioning from exponential to polynomial scaling in many practical settings. Our results advance two crucial realms in quantum computation: the exploration of quantum algorithms with practical utilities and learning-based quantum system certification. We conduct numerical simulations to validate our proposals across diverse scenarios, encompassing quantum information processing protocols, Hamiltonian simulation, and variational quantum algorithms up to 60 qubits.
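As a sketch of the truncated trigonometric expansion idea, the kernel below keeps Fourier terms involving at most `order` of the d RZ angles, computed via elementary symmetric polynomials of the per-angle factors cos(theta_j - theta_j'). The truncation order is the knob trading prediction error against computational cost; the exact kernel and truncation used in the paper may differ.

```python
# Truncated trigonometric kernel over the d RZ angles.
import numpy as np

def truncated_trig_kernel(t1, t2, order=2):
    c = np.cos(t1 - t2)  # one factor per RZ angle
    # e[k] = elementary symmetric polynomial of degree k in the factors c_j,
    # i.e. the sum over all size-k subsets of angles; e[0] = 1.
    e = np.zeros(order + 1)
    e[0] = 1.0
    for cj in c:
        for k in range(min(order, len(c)), 0, -1):
            e[k] += cj * e[k - 1]
    return float(e.sum())  # sum of all Fourier terms up to the cutoff

rng = np.random.default_rng(0)
d = 8
theta_a = rng.uniform(0, 2 * np.pi, d)
theta_b = rng.uniform(0, 2 * np.pi, d)
print(truncated_trig_kernel(theta_a, theta_b, order=2))
```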


Optimal high-precision shadow estimation

Chen, Sitan, Li, Jerry, Liu, Allen

arXiv.org Artificial Intelligence

We give the first tight sample complexity bounds for shadow tomography and classical shadows in the regime where the target error is below some sufficiently small inverse polynomial in the dimension of the Hilbert space. Formally, we give a protocol that, given any $m\in\mathbb{N}$ and $\epsilon \le O(d^{-12})$, measures $O(\log(m)/\epsilon^2)$ copies of an unknown mixed state $\rho\in\mathbb{C}^{d\times d}$ and outputs a classical description of $\rho$ which can then be used to estimate any collection of $m$ observables to within additive accuracy $\epsilon$. Previously, even for the simpler task of shadow tomography, where the $m$ observables are known in advance, the best known rates either scaled benignly but suboptimally in all of $m, d, \epsilon$, or scaled optimally in $\epsilon, m$ but had additional polynomial factors in $d$ for general observables. Intriguingly, we also show, via dimensionality reduction, that we can rescale $\epsilon$ and $d$ to reduce to the regime where $\epsilon \le O(d^{-1/2})$. Our algorithm draws upon representation-theoretic tools recently developed in the context of full state tomography.


Learning topological states from randomized measurements using variational tensor network tomography

Teng, Yanting, Samajdar, Rhine, Van Kirk, Katherine, Wilde, Frederik, Sachdev, Subir, Eisert, Jens, Sweke, Ryan, Najafi, Khadijeh

arXiv.org Machine Learning

Learning faithful representations of quantum states is crucial to fully characterizing the variety of many-body states created on quantum processors. While various tomographic methods such as classical shadows and MPS tomography have shown promise in characterizing a wide class of quantum states, they face unique limitations in detecting topologically ordered two-dimensional states. To address this problem, we implement and study a heuristic tomographic method that combines variational optimization on tensor networks with randomized measurement techniques. Using this approach, we demonstrate its ability to learn the ground state of the surface code Hamiltonian as well as an experimentally realizable quantum spin liquid state. In particular, we perform numerical experiments using MPS ansätze and systematically investigate the sample complexity required to achieve high fidelities for systems of sizes up to $48$ qubits. In addition, we provide theoretical insights into the scaling of our learning algorithm by analyzing the statistical properties of maximum likelihood estimation. Notably, our method is sample-efficient and experimentally friendly, only requiring snapshots of the quantum state measured randomly in the $X$ or $Z$ bases. Using this subset of measurements, our approach can effectively learn any real pure state represented by tensor networks, and we rigorously prove that random-$XZ$ measurements are tomographically complete for such states.
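A minimal sketch of the measurement model and likelihood described above: every snapshot measures each qubit in a randomly chosen X or Z basis, and the negative log-likelihood of a candidate state is averaged over snapshots. A full statevector stands in for the MPS ansatz purely to keep the example small; the paper instead optimizes tensor-network parameters against this kind of objective.

```python
# Random-XZ snapshots and the MLE objective for a small pure state.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # X-basis rotation
I2 = np.eye(2)

def basis_rotation(bases):
    """Tensor product of per-qubit rotations: H for an X measurement, I for Z."""
    U = np.array([[1.0]])
    for b in bases:
        U = np.kron(U, H if b == "X" else I2)
    return U

def sample_snapshot(psi, rng):
    n = int(np.log2(len(psi)))
    bases = rng.choice(["X", "Z"], size=n)
    p = np.abs(basis_rotation(bases) @ psi) ** 2
    outcome = rng.choice(len(psi), p=p / p.sum())
    return tuple(bases), outcome

def neg_log_likelihood(psi_model, snapshots):
    nll = 0.0
    for bases, outcome in snapshots:
        amp = (basis_rotation(bases) @ psi_model)[outcome]
        nll -= np.log(abs(amp) ** 2 + 1e-12)
    return nll / len(snapshots)

rng = np.random.default_rng(0)
ghz = np.zeros(8)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)  # 3-qubit GHZ target state
data = [sample_snapshot(ghz, rng) for _ in range(200)]
print(neg_log_likelihood(ghz, data))  # low NLL at the true state
```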


Principal eigenstate classical shadows

Grier, Daniel, Pashayan, Hakop, Schaeffer, Luke

arXiv.org Artificial Intelligence

Given many copies of an unknown quantum state $\rho$, we consider the task of learning a classical description of its principal eigenstate. Namely, assuming that $\rho$ has an eigenstate $|\phi\rangle$ with (unknown) eigenvalue $\lambda > 1/2$, the goal is to learn a (classical shadows style) classical description of $|\phi\rangle$ which can later be used to estimate expectation values $\langle \phi |O| \phi \rangle$ for any $O$ in some class of observables. We consider the sample-complexity setting in which generating a copy of $\rho$ is expensive, but joint measurements on many copies of the state are possible. We present a protocol for this task scaling with the principal eigenvalue $\lambda$ and show that it is optimal within a space of natural approaches, e.g., applying quantum state purification followed by a single-copy classical shadows scheme. Furthermore, when $\lambda$ is sufficiently close to $1$, the performance of our algorithm is optimal, matching the sample complexity for pure state classical shadows.


Multimodal deep representation learning for quantum cross-platform verification

Qian, Yang, Du, Yuxuan, He, Zhenliang, Hsieh, Min-hsiu, Tao, Dacheng

arXiv.org Artificial Intelligence

Cross-platform verification, a critical undertaking in the realm of early-stage quantum computing, endeavors to characterize the similarity of two imperfect quantum devices executing identical algorithms, utilizing minimal measurements. While the random measurement approach has been instrumental in this context, its quasi-exponential computational demand with increasing qubit count hinders its feasibility in large-qubit scenarios. To bridge this gap, we introduce an innovative multimodal learning approach, recognizing that the data in this task embody two distinct modalities: measurement outcomes and classical descriptions of the compiled circuits on the explored quantum devices, both enriched with unique information. Building upon this insight, we devise a multimodal neural network to independently extract knowledge from these modalities, followed by a fusion operation to create a comprehensive data representation. The learned representation can effectively characterize the similarity between the explored quantum devices when executing new quantum algorithms not present in the training data. We evaluate our proposal on platforms featuring diverse noise models, encompassing system sizes up to 50 qubits. The achieved results demonstrate a three-orders-of-magnitude improvement in prediction accuracy compared to random measurements and offer compelling evidence of the complementary roles played by each modality in cross-platform verification. These findings pave the way for harnessing the power of multimodal learning to overcome challenges in wider quantum system learning tasks.
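A minimal sketch of the two-branch architecture described above: one encoder for measurement-outcome features, one for the classical circuit description, with a fusion step producing the joint representation. The layer sizes, concatenation fusion, and scalar similarity head are illustrative assumptions, not the paper's exact network.

```python
# Two-branch multimodal network with a fusion head (PyTorch sketch).
import torch
import torch.nn as nn

class CrossPlatformNet(nn.Module):
    def __init__(self, meas_dim, circ_dim, hidden=64, rep=32):
        super().__init__()
        # One encoder per modality: measurement outcomes and circuit features.
        self.meas_enc = nn.Sequential(nn.Linear(meas_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, rep))
        self.circ_enc = nn.Sequential(nn.Linear(circ_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, rep))
        # Fusion by concatenation, then a scalar similarity prediction.
        self.head = nn.Sequential(nn.Linear(2 * rep, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 1))

    def forward(self, meas, circ):
        z = torch.cat([self.meas_enc(meas), self.circ_enc(circ)], dim=-1)
        return self.head(z).squeeze(-1)

net = CrossPlatformNet(meas_dim=128, circ_dim=64)
meas, circ = torch.randn(4, 128), torch.randn(4, 64)
print(net(meas, circ).shape)  # torch.Size([4]), one similarity per pair
```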